Investigation of Parallel Computations: Machines, Tools and Programs

Author

  • Roberto Togneri
Abstract

Parallel and distributed computing are proving to be the next logical step for high performance computing. As single processor systems approach the limits of current VLSI technology, multiple processor systems will present a feasible architecture for scalable high performance computing and resource utilisation. Furthermore, as workstations become more diffuse and more reliant on network technology, distributed computing presents an attractive means of optimal utilisation of networked resources. In this report both multi-processor and distributed computing systems are studied in the parallelisation of the VA, Chirp, LBG and Move-Means algorithms common in digital signal processing research. Multi-processors are shown to provide a friendlier development environment due to the shared memory programming model and better efficiency due to low communication latencies. A workstation cluster is shown to be harder to use for application programmers due to the message-passing model, and parallelisation efficiency is lower due to the high communication latencies. The message-passing paradigm both facilitates and enforces optimal parallelisation considerations (like data locality) at the cost of greater effort. The Chirp and LBG algorithms parallelise well on both architectures, but the VA and Move-Means are poor candidates for parallelisation. Issues like development environment, program analysis and debugging, task allocation, migration and load balancing, and resource utilisation are shown to be important factors in the use of parallel and distributed computing systems.
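To make the message-passing model concrete, the sketch below shows how the codebook-update step of an LBG-style vector quantiser can be distributed over a cluster: each process classifies only its local block of training vectors, and a single collective reduction combines the partial sums. This is an illustration only, not code from the report; the abstract does not say which message-passing library was used, so MPI is assumed here, and NCODE, DIM, NLOCAL and the synthetic training data are placeholders.

/*
 * Illustrative sketch (not from the report): one iteration of the
 * codebook-update step of an LBG-style vector quantiser, parallelised
 * with message passing (MPI assumed).  Each rank keeps a local block of
 * training vectors (data locality), classifies them against the current
 * codebook, and a single collective reduction combines the partial
 * sums and counts.  Compile with: mpicc lbg_step.c -o lbg_step
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NCODE  4      /* codebook size (illustrative) */
#define DIM    2      /* vector dimension (illustrative) */
#define NLOCAL 1000   /* training vectors owned by each rank */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Current codebook, held identically on every rank. */
    double code[NCODE][DIM];
    for (int c = 0; c < NCODE; c++)
        for (int d = 0; d < DIM; d++)
            code[c][d] = (double)c;          /* placeholder initialisation */

    /* Local block of training data: synthetic values for illustration. */
    double (*x)[DIM] = malloc(sizeof(double[NLOCAL][DIM]));
    for (int i = 0; i < NLOCAL; i++)
        for (int d = 0; d < DIM; d++)
            x[i][d] = (double)((rank * NLOCAL + i) % NCODE) + 0.1 * d;

    /* Partial sums and counts accumulated from local data only. */
    double sum_local[NCODE][DIM] = {{0.0}};
    double cnt_local[NCODE] = {0.0};

    for (int i = 0; i < NLOCAL; i++) {
        int best = 0;
        double best_d2 = 1e300;
        for (int c = 0; c < NCODE; c++) {         /* nearest codeword */
            double d2 = 0.0;
            for (int d = 0; d < DIM; d++) {
                double diff = x[i][d] - code[c][d];
                d2 += diff * diff;
            }
            if (d2 < best_d2) { best_d2 = d2; best = c; }
        }
        cnt_local[best] += 1.0;
        for (int d = 0; d < DIM; d++)
            sum_local[best][d] += x[i][d];
    }

    /* One collective exchange per iteration combines the partial results. */
    double sum_glob[NCODE][DIM], cnt_glob[NCODE];
    MPI_Allreduce(sum_local, sum_glob, NCODE * DIM, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
    MPI_Allreduce(cnt_local, cnt_glob, NCODE, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    /* Every rank applies the same centroid update to its codebook copy. */
    for (int c = 0; c < NCODE; c++)
        if (cnt_glob[c] > 0.0)
            for (int d = 0; d < DIM; d++)
                code[c][d] = sum_glob[c][d] / cnt_glob[c];

    if (rank == 0)
        printf("updated codeword 0: (%g, %g)\n", code[0][0], code[0][1]);

    free(x);
    MPI_Finalize();
    return 0;
}

Because each iteration needs only one collective exchange, the communication cost stays small relative to the local computation, which illustrates how a data-local algorithm of this kind can remain reasonably efficient even over a high-latency workstation network.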

Similar articles

The Illinois Concert System: Programming Support for Irregular Parallel Applications

Irregular applications are critical to supporting grand challenge applications on massively parallel machines and extending the utility of those machines beyond the scientific computing domain. The dominant parallel programming models, data parallel and explicit message passing, provide little support for programming irregular applications. We articulate a set of requirements for supporting irreg...

Chare Kernel - a Runtime Support System for Parallel Computations

This paper presents the chare kernel system, which supports parallel computations with irregular structure. The chare kernel is a collection of primitive functions that manage chares, manipulate messages, invoke atomic computations, and coordinate concurrent activities. Programs written in the chare kernel language can be executed on different parallel machines without change. Users writing su...

Mechanisms for Just-in-Time Allocation of Resources to Adaptive Parallel Programs

Adaptive parallel computations—computations that can adapt to changes in resource availability and requirement— can effectively use networked machines because they dynamically expand as machines become available and dynamically acquire machines as needed. While most parallel programming systems provide the means to develop adaptive programs, they do not provide any functional interface to exter...

Heuristic approach to solve hybrid flow shop scheduling problem with unrelated parallel machines

In hybrid flow shop scheduling problem (HFS) with unrelated parallel machines, a set of n jobs are processed on k machines. A mixed integer linear programming (MILP) model for the HFS scheduling problems with unrelated parallel machines has been proposed to minimize the maximum completion time (makespan). Since the problem is shown to be NP-complete, it is necessary to use heuristic methods to ...

A MATLAB-Based Code Generator for Parallel Sparse Matrix Computations Utilizing PSBLAS

Parallel programs for distributed memory machines are not easy to create and maintain, especially when they involve sparse matrix computations. In this paper, we propose a program translation system for generating parallel sparse matrix computation codes utilizing PSBLAS. The purpose of the development of the system is to offer the user a convenient way to construct parallel sparse code based o...

Solving the Problem of Scheduling Unrelated Parallel Machines with Limited Access to Jobs

Nowadays, with the successful application of the on-time production concept in areas such as production management and storage, the need to complete the processing of jobs by their delivery times has become a key issue in industrial environments. Unrelated parallel machine scheduling is a general form of the classic parallel machine scheduling problems. In some of the applications of unrelated parallel mac...



Publication date: 1994